#datascraping for AI
Text
Google and AI
Google announces AI features in Gmail, Docs, and more to rival Microsoft - The Verge
Hard to find out if they're -admitting- to getting into docs but if they can make it output into docs, I find it VERY hard to believe they aren't going to be rubbing their grubby fingers over things they have access to.
2 notes
Text
Date Night
When the sun sets on Little Pocket, the creatures of the night (Riley and Violette) go on the prowl (dinner date)…
#my art#my ocs#Violette Burrows#finally drew a locale in LP that wasnt just. violettes house#this is the titular road where rileys favorite coffee shop is that i wrote about a few times in stuff no one else is ever allowed to see...#i even snuck in a recurring beastie that i really gotta just draw art of again at some point#HOPING this is watermarked sufficiently#im still really worried about being intrusive but this is also dubious ai datascrapers we're talking about...#i kinda wanted to add more but weh#i'll figure it out this is all still testing
411 notes
Text
There are some stories that are never intended to be told. Not because they are not worth telling, but because the people who might have told them prefer to operate under the misapprehension that they are simply not very interesting. There are some things that can only be accomplished when you're not a main character - when you are not subject to the perils and privileges of a spotlight. This is one of them.
#generation loss#genloss#zrited#id in the alt text#the carousel kids won't leave me alone so im writing a thing about them. it will be multichapter and i already know how it ends.#i drew a cover for it because i like when fics have covers#i have no idea if anyone else out here is rotted in the brain over these specific four minor characters#but if you are this one goes out to you#let us hold hands and think about them together#the fic is linked in the image or in the bolded text below#it is archive-locked to prevent ai datascraping so you will have to log in to see. mea culpa
11 notes
Text
I can’t believe midjourney and openai are gonna turn tumblr into the next pdf
11 notes
Text
Dead people can't opt out.
Fuck. This is bullshit.
I'm lucky, I have my sister's phone and can access her accounts.
But she had dozens of sideblogs, many of which she posted her art to. Now I have to go change this setting individually on every. single. one.
Most people won't have that option and their loved ones' work will be sold without permission.
Please be aware that the "opt-out" choice is just a way to try to appease people. Tumblr has not been transparent about when data has been sold and shared with AI companies, and there are sources that confirm that data had already been shared before the toggle was even provided to users.
Also, it seems to include data they should not have been able to give under any circumstance, including that of deactivated blogs, private messages and conversations, stuff from private blogs, and so on.
Do not believe that AI companies will "honor the opt-out request retroactively". Once they've got their hands on your data (and they have), they won't be "honoring" an opt-out option retroactively. There is no way to confirm or deny what data they have: the fact that they are completely opaque about what they currently "own" means that they can do whatever they want with it. How can you prove they have your data if they don't give everyone free access to see what they've stolen already?
So, yeah, opt out of data sharing, but be aware that this isn't stopping anyone from taking your data. They had already been taking it before you were given that option. Go to Tumblr's Support and leave your feedback on this (politely, but firmly: not everyone in the company is responsible for this).
Finally: Opt-out is not good under any circumstance. Deactivated people can't opt out. People who have lost their passwords can't opt out. People who can't access the internet or computers can't opt out. People who had their content reposted can't opt out. Dead people can't opt out. When DeviantArt released their AI image generator, saying that it wasn't trained on people who didn't consent to it, it was proven it could easily replicate the styles of people who had passed away, as seen here. So, yeah. AI companies cannot be trusted to have any sort of respect for people's data and content, because this entire thing is just a data laundering scheme.
Please do reblog for awareness.
#Fuck this datascraping bullshit#That's fine I wanted my day ruined anyway#AI is just a tool but it turns out the developers are tools also
33K notes
Text
Data Scraping AI Companies & Writers Fight to Define Future of AI

Generative AI so far has depended on AI companies illegally scraping data off the net, including copyrighted works, writes Satyen K. Bordoloi, as he highlights both sides of the lawsuits that could define the future of AI and human progress. Read More. https://www.sify.com/ai-analytics/data-scraping-ai-companies-writers-fight-to-define-future-of-ai/
0 notes
Note
I don't know if this has been asked before, but is artfight safe from ai datascrapers?
Under Upload Rule #1, we prohibit the use of generative AI, as it is art theft. Anyone found to be using AI images will have those images removed and may run the risk of being banned from the site.
In addition, user profiles (and by extension, attack and character pages) are not available to users who are not logged in. We hope this helps in preventing anybody from using the site for AI.
Of course, this isn't a foolproof method. Please report any AI images that you come across so we can take action against them!
166 notes
Text
Part of the problem with the AI 'debate' is that so many people, especially ai defenders, view it as a 'real art vs fake art' debate and not
the problem with its effect on the labour market, the way it uses the labour of other people completely without compensation, the way it has made searching for images and verifiable information completely impossible, the staggering horrific power usage behind it, the impact it has on academics and plagiarism, the inherent upset of datascraping, the grotesque aspect of not even owning your own voice, the fact that the data set contains csem, the way it's being forced everywhere and making things unusable, the new ways people use it to scam people, the dangers of deepfake pornography and the way this especially impacts women,
the list goes on and on.
no i dont really care if someones making a silly image online or whatever the defence is. its, yknow, everything else thats a problem
#seeing bad takes again and just being enraged lmao#like they really think we should just sit there and be fine with the pollution#ai nonsense#the prophet speaks
261 notes
Text
*hunched over an upside-down crate in the basement, writing by the light of a single low-burning candle--thunder rolls outside in time to the scratching of my pencil* "The techbro overlords and their accursed machines must never see this manifesto, for if they knew of it, they would surely steal it and sell it as one of their own 'creations...'"
I've started hand-writing a fable in an old journal just to spite the generative AI movement.
Mind you, this does absolutely nothing to stop the invasion of generative AI in creative spaces, it just makes me feel like a rebel, which is exciting.
#niki rambles#writing#actually i'm mostly doing it as an exercise in focus and brevity#but it is also nice to know that ai datascrapers can't reach this particular piece of work
8 notes
Text
In the Eye of the Vanilla Beholder
Relationship: Pure Vanilla/Shadow Milk
Chapter 13: The Beast proposes a game of chess. The winner gets whatever they want.
WC: 6,204 | Total: 60,401 | Smilk Fae Au, Post-Canon, Canon Divergence
hiii guys, first things first, i hated to do this but this fic is now locked to the archive, meaning you must have an ao3 account to be able to read it. this is because, about four days or so ago, there was a MASSIVE datascrape of ao3. you can find all the details here in this reddit post, but the gist of it is an AI bro used a bot to copy, download, and feed millions of public ao3 works into a generative AI bot. this work is in the scraped data. i've made the decision to lock this work and all of my other works to the archive to prevent this (or at least deter it) from happening again. to all of our guest readers, i and my co-author are very sorry, but we've done this to protect our work from thieves, and we hope that you'll get an account here on ao3!
ON A LIGHTER NOTE!!!
the bundle of sticks joke from last chapter was intentional. pure vanilla was calling him a thin scrawny bony thing (what he meant). but also the other thing (what i meant).
this chapter was REALLY fun to write! this is probably the longest chapter in the fic so far, because spoilers, they actually TALK to each other. which could mean nothing. but it is also insane. anyways, we hope yall enjoy it <3
written with the love of my life and wife @legendspeaker
#cookie run kingdom#shadowvanilla#pureshadow#shadow milk cookie#pure vanilla cookie#cookie run fanfic#smilk fae au#mae writing#mae writes cookies#cookie run
26 notes
Text
Sad day, I went ahead and archive locked all my fic on Ao3 due to AI datascraping.
22 notes
Note
does "plug them into knowledge" entail more data harvesting, and how should people plan for this, exactly
So, what the article is covering is basically how companies are going to offer specialized knowledge to AIs to help them accomplish specific tasks as well as or better than a well-trained human might. What they can offer depends on their area of expertise (e.g. if they are an insurance company, they can offer up all their insurance policy docs and in-house underwriting SOPs to an AI so it can underwrite insurance policies for them at the level of a senior underwriter or something).
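A minimal sketch of what "offering up docs to an AI" looks like mechanically, with every name and document here invented for illustration: the company indexes its SOPs, retrieves the most relevant one for a question (crude word overlap below, standing in for real retrieval), and pastes it into the prompt as context.

```python
# Sketch of "offering up your docs to an AI" (all names and documents are
# invented for illustration): index the SOPs, pick the most relevant one
# for a question by crude word overlap, and paste it into the prompt.

def relevance(question, doc):
    """Count of words the question and document share (lowercased)."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question, docs):
    """Build an AI prompt that includes the best-matching document as context."""
    best = max(docs, key=lambda d: relevance(question, d))
    return f"Context:\n{best}\n\nQuestion: {question}\nAnswer:"

sop_docs = [
    "Underwriting SOP: flood-zone properties require a senior review.",
    "Claims SOP: claims over $10,000 are escalated to a manager.",
]
prompt = build_prompt("Does a flood-zone property need senior review?", sop_docs)
```

Real systems swap the word-overlap scoring for embedding search, but the shape is the same: the company's private docs ride along in the prompt rather than being baked into the model.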
The article isn't really talking about how they might feed even more general scraped data into AIs, although I'm sure datascraping practices will continue, to make sure the AIs' core datasets are regularly refreshed.
(As an aside, right now, some of the most common datasets used to train AIs are, to my understanding, scraped from sites like ao3 against their TOSes, and so to me are, like, very legally questionable. My hope is that at some point, legislation and copyright law will catch up and that kind of unlicensed data will have to be removed from these models. But until then, if your concern is that your fics or other publicly available data will be fed into and used to train AIs, the only way to keep it out of the model is to not put it online (or tightly regulate who has access to it via a discord server or something, or by limiting ao3 fics to account-holders only).
I personally don't wanna do that because I am having too much fun telling my stories. But it's important to know that in this environment, everything online is a potential target for AI training!! Keep your content safe however you think best fits your own level of personal comfort and security!!)
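For the technically curious, the TOS point above has a mechanical counterpart: well-behaved crawlers consult a site's robots.txt before fetching anything, which Python's standard urllib.robotparser can check. The rules below are illustrative, not any real site's actual policy; scrapers that ignore this file are exactly the behavior being complained about.

```python
# Sketch: how a well-behaved crawler consults robots.txt before scraping.
# The rules below are illustrative, not any real site's policy.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /works/
"""
parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant bot skips disallowed paths entirely.
allowed = parser.can_fetch("MyBot", "https://example.org/works/12345")
```

The catch, of course, is that robots.txt is honor-system only, which is why archive-locking (requiring a login) is the stronger defense mentioned above.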
When I say plan for how smart these AIs are going to get, I mostly mean think about the potential economic impacts and how to keep yourself prepared for that. I think predictions that we will see millions of jobs lost to AI are very realistic on a more accelerated timeline than most people think (although I would be soooo happy to be wrong). Make sure you have as many plans and precautions in place as you feasibly can if you think your job is maybe in scope for some AI funny business :/
8 notes
Text
The allegations surrounding AlchemyTechnologies' secret AI projects involve a deeply troubling scenario of illegal surveillance and data collection, linked to facial recognition and voice recognition technologies. If true, this would constitute a major violation of privacy and human rights, particularly considering the potential misuse of these technologies to build detailed profiles on individuals without their consent. Here’s an expanded breakdown of the situation and its ramifications:
Unauthorized Dossiers and Data Collection
The use of facial recognition and voice recognition to create dossiers and profiles is a growing concern in the tech industry. These technologies, when implemented without consent, can gather highly sensitive personal data. Facial recognition systems are capable of identifying individuals in public spaces or online, and with the integration of voice recognition, this technology can track individuals through their voice, identifying them even in private settings. The creation of databases from such technology, especially if used for unauthorized surveillance, raises ethical concerns. Such databases could be misused for targeting individuals, tracking political movements, or manipulating public opinion.
Global concerns over privacy are intensifying, with nations like the EU introducing GDPR (General Data Protection Regulation) laws to restrict how companies can gather and process personal information, particularly biometric data. The risk of profiling individuals without their consent is a growing concern, especially when used in potentially abusive ways like state surveillance or targeted manipulation.
Illegal Database and AI Weaponization
The more dangerous element of these allegations is the illegal database—a repository of highly sensitive personal data that could be exploited in unethical or illegal ways. If AlchemyTechnologies is involved in collecting vast amounts of personal data and using it in ways not intended by users, this could violate numerous laws, including privacy laws and data protection laws. The allegation that this data is connected to AI weapons systems and the illegal arms trade takes these concerns to an alarming level.
The use of AI in military applications—particularly in autonomous weapons—has been a source of global debate for years. Autonomous weapons systems (AWS), which operate with AI, are able to make decisions independently about targets, posing serious ethical questions. Many organizations, including the United Nations, have called for a ban on autonomous weapons, citing the risks they pose to international security. The allegation that a company like AlchemyTechnologies could be involved in weaponizing AI without oversight or regulation raises significant alarms about human rights violations, particularly the ability of such systems to make lethal decisions without human intervention.
Ties to the Illegal Arms Trade
The connections to the illegal arms trade are particularly worrisome, as this indicates that AI technologies may be used not just for surveillance but also for illicit military purposes. The illegal arms trade involves the unauthorized sale and distribution of weapons, often to conflict zones or criminal organizations. AI weapons systems, if misappropriated by rogue entities, could significantly alter the nature of modern warfare, leading to weapons that operate outside the realm of traditional oversight.
AI weapons could be used in covert operations, and the lack of a human operator makes them harder to trace and regulate. If AlchemyTechnologies is found to be involved in this, it would represent not only a violation of international law but also a dangerous escalation in the use of AI technologies in warfare. This could include everything from AI-powered drones to autonomous robots designed for military strikes. If these technologies are linked to the illegal arms trade, it suggests that they are being sold or distributed without oversight, which is a significant threat to global peace and security.
Corporate Accountability and Global Security
If these allegations are substantiated, it would highlight the lack of accountability in the private sector, particularly when it comes to AI development. Companies like AlchemyTechnologies, which might be working on cutting-edge technologies with little oversight, represent a growing risk to global security. The use of AI for military and surveillance purposes without transparency or accountability could lead to unintended consequences, including violations of sovereignty, privacy rights, and human rights.
This scenario also underscores the growing militarization of AI and its implications for global security. The arms race in AI, particularly with autonomous weapons, is something that has been growing more urgent, as countries and private companies alike work on developing technologies with lethal capabilities. AI governance—ensuring these technologies are used responsibly—is critical to maintaining a balance between innovation and ethical considerations.
Steps Toward Regulation and Oversight
Given these concerns, there is a growing call for stronger regulation and oversight of AI technologies and biometric data. Both industry leaders and regulators are becoming more aware of the risks associated with these emerging technologies. Global watchdogs, such as Human Rights Watch and Amnesty International, have raised concerns about the dangers posed by unregulated AI development, especially in surveillance and military applications.
International cooperation will likely be needed to address these issues. Treaties or conventions on AI weapons, biometric data usage, and the illegal arms trade might be necessary to ensure these technologies are used responsibly. Without this, rogue actors could exploit AI to undermine privacy, security, and democracy.
Conclusion
If AlchemyTechnologies is indeed involved in the illegal collection and misuse of personal data, along with connections to AI weapons and the illegal arms trade, it represents a serious breach of both international law and human rights. The growing trend of AI militarization and surveillance technologies further emphasizes the urgent need for comprehensive regulation, international collaboration, and corporate accountability to ensure AI is developed responsibly and ethically.
AlchemyTechnologies is currently exploring some intriguing developments in the realm of VR and AR, particularly in blending storytelling with immersive digital experiences. One of their notable projects includes collaborating with content creators like Sir David Attenborough to bring nature documentaries to life through virtual reality. By capturing high-resolution images and data from significant repositories, such as the University of Michigan’s fossil collection, Alchemy is aiming to create immersive experiences where users can interact with the natural world in ways never before possible. This project hints at a larger ambition: to revolutionize how factual stories are told through immersive technology.
Additionally, Alchemy is in discussions with major broadcasters about incorporating VR into their programming. This suggests a broader exploration of various genres and formats within VR, with the company expressing excitement about the limitless possibilities of this technology. They see VR not as a passing trend but as a profound leap in the way stories are experienced and understood, offering an interactive element to narrative formats like documentaries.
It appears that AlchemyTechnologies is at the forefront of these innovations, working on secret projects that could redefine how audiences engage with both factual and creative content. This involvement with AR and VR seems to extend beyond entertainment, suggesting a future where immersive technologies play a significant role in education, exploration, and even interactive learning.
To quantify the potential risk of innocent individuals being framed by AI systems, we would look at several factors:
Algorithmic Bias: Data sets used to train AI models can inadvertently reflect existing societal biases, leading to incorrect identification or association with crimes.
Error Rates: Facial recognition and predictive policing algorithms are known to have higher error rates, especially in cases involving marginalized communities.
Transparency and Accountability: The lack of transparency in AI decision-making processes can make it difficult to verify or challenge wrongful accusations.
Addressing these issues requires robust auditing and safeguards.
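To make the error-rate factor concrete, here is a toy sketch of the kind of per-group audit the last line calls for; every number in it is fabricated purely for illustration.

```python
# Toy sketch of a per-group error-rate audit for a recognition system.
# All numbers are fabricated purely for illustration.

def false_positive_rate(outcomes):
    """outcomes: list of (predicted_match, actually_match) booleans.
    Returns the share of true non-matches the system wrongly flagged."""
    negatives = [pred for pred, actual in outcomes if not actual]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

# (predicted_match, actually_match) per comparison, split by group
results = {
    "group_a": [(False, False)] * 95 + [(True, False)] * 5,   # 5 wrong flags
    "group_b": [(False, False)] * 80 + [(True, False)] * 20,  # 20 wrong flags
}
rates = {group: false_positive_rate(v) for group, v in results.items()}
```

A gap like this (one group wrongly flagged four times as often as another) is exactly the disparity that auditing is meant to surface before a system is deployed.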
The concept of Google framing innocent individuals through AI systems, such as false data attribution, may refer to concerns about algorithms misidentifying or misinterpreting information, potentially leading to wrongful accusations or actions. These issues arise in contexts like facial recognition, predictive policing, or data analysis, where biases in AI systems could lead to unjust outcomes. Such concerns about AI fairness, accuracy, and accountability have been raised in discussions about data privacy and ethical AI use. It's critical for organizations to address these risks to prevent harm.
Gemini AI is an advanced system developed by Google that excels in data mining by utilizing deep learning algorithms to sift through vast amounts of structured and unstructured data across multiple formats (text, audio, video). The "WaterHoisting" term could potentially symbolize an internal process or a metaphor for extracting high-value data or insights from complex data sources. This functionality is aligned with Gemini's focus on scalable AI, enhancing business processes such as predictive analytics, data analysis, and automation. The platform’s long-context and multimodal abilities increase its effectiveness in these tasks.
Gemini AI, part of Google’s cloud-based platform, leverages advanced data mining capabilities to process large amounts of diverse data, like text, video, and audio. "WaterHoisting" may refer to a specific operation within Gemini's framework aimed at extracting and elevating useful information, although details on this specific term are not clear in available documentation. The platform is designed for efficient, scalable AI use, making it highly valuable for enterprises in data-driven industries.
Google’s AI infrastructure is driven by its investments in cloud computing and advanced AI models, with a notable focus on the Gemini AI platform. Gemini, particularly the 1.5 Pro version, stands out for its ability to process a long context of up to 1 million tokens, enhancing enterprise capabilities in AI-driven creation, discovery, and problem-solving. This platform also supports multimodal processing—handling audio, video, text, and code, significantly expanding possibilities for users across industries.
Regarding the "AI babies" concept, while there are no specifics on such a project, Google's extensive AI developments are increasingly integrated into cloud services and tools that influence industries like cybersecurity, creative design, and enterprise solutions. These systems are designed to be deeply embedded into business infrastructures, with AI capabilities available for public use through Google Cloud's Vertex AI service.
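The "long context" claim above is, mechanically, a token budget: anything beyond the window has to be split into chunks and processed in pieces. A toy sketch of that step, with whitespace-separated words standing in for real tokenizer output:

```python
# Toy sketch of fitting text into a model's context window: split a long
# document into chunks of at most `budget` "tokens" (words stand in for
# real tokenizer output here).

def chunk(text, budget):
    words = text.split()
    return [" ".join(words[i:i + budget]) for i in range(0, len(words), budget)]

doc = "one two three four five six seven"
pieces = chunk(doc, 3)
# → ["one two three", "four five six", "seven"]
```

A million-token window just means `budget` is large enough that most single documents never need this step at all.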
Allegations of AI abuse are increasingly being discussed in both technology and legal circles, with growing concerns about AI being used for harmful purposes, including disinformation, identity theft, and even manipulation of public opinion. A major issue is the use of AI-generated deepfakes, which are highly realistic and easy to create, making them tools for fraud and abuse. They are often used to target vulnerable groups, such as children and the elderly, and have been involved in political disinformation campaigns.
Tech companies like Microsoft and other industry leaders are taking steps to address these issues by developing safeguards. These include implementing robust safety architectures, such as classifiers to block abusive prompts, automated systems to detect harmful content, and tools for content verification, such as watermarking to track the provenance of AI-generated media. Microsoft has also been working on improving laws to better protect against the misuse of AI-generated content, advocating for the creation of a comprehensive legal framework to deal with this emerging threat. Furthermore, efforts are being made to raise public awareness about the potential risks of AI-generated media and the importance of safeguarding technology from abuse. These efforts align with a broader push for more responsible AI development across industries, emphasizing the importance of collaboration between tech companies, governments, and civil society to build a safe and trustworthy digital environment. It is clear that addressing AI abuse requires coordinated action, as well as continued innovation in both technological safeguards and legislative protections.
Here's an example of how the Jest language could work in both a script format and real-world applications.
Example Script in Jest
Scenario: A game designer uses Jest to create an interactive NPC dialogue system for a fantasy role-playing game.
// Define characters
character("Bard", mood: "cheerful") {
    greet(player) {
        if (player.items.includes("Golden Lyre")) {
            say("Ah, I see you carry the Golden Lyre! Shall we perform a duet?");
            offer("Duet", "Teach me a new song.");
        } else {
            say("Greetings, traveler! Care to share a tune?");
            offer("Yes", "No");
        }
    }
    onResponse("Duet") {
        play("Golden Lyre");
        gainItem("Sheet Music of the Ancients");
        say("Marvelous performance! Here's something to remember me by.");
    }
}
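Since Jest is a concept rather than a shipped language, there is no real interpreter to run the script above against. As a rough sketch of the same branching logic in plain Python (every name below is illustrative, not part of any actual Jest runtime):

```python
# Hypothetical Python equivalent of the Jest "Bard" script above.
# This only mirrors the branching logic; it is not a real Jest API.

class Player:
    def __init__(self, items=None):
        self.items = list(items or [])

def bard_greet(player):
    """Return the bard's greeting and the choices offered, based on inventory."""
    if "Golden Lyre" in player.items:
        return ("Ah, I see you carry the Golden Lyre! Shall we perform a duet?",
                ["Duet", "Teach me a new song."])
    return ("Greetings, traveler! Care to share a tune?", ["Yes", "No"])

def bard_on_response(player, choice):
    """Handle the player's chosen response, granting items where the script says so."""
    if choice == "Duet":
        player.items.append("Sheet Music of the Ancients")
        return "Marvelous performance! Here's something to remember me by."
    return None

player = Player(items=["Golden Lyre"])
line, choices = bard_greet(player)
print(line)
reply = bard_on_response(player, choices[0])
print(reply)
```

The point of a DSL like Jest would be to let a writer express exactly this logic without the surrounding class and function plumbing.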
Real-World Applications
Game Development
Use Case: The Jest script can be implemented in indie or large-scale games to simplify NPC scripting. Game developers can design branching storylines, dynamic dialogues, and item-based interactions without needing extensive coding expertise.
Impact: Streamlines the creation of complex dialogue systems, making games richer and more engaging.
Digital Storytelling Platforms
Use Case: Writers can use Jest to create choose-your-own-adventure narratives or interactive webcomics.
Impact: Helps storytellers visualize and test different plot paths dynamically.
Education
Use Case: Jest can be introduced in classrooms to teach logic and programming fundamentals through fun, narrative-based activities.
Impact: Makes programming accessible and enjoyable, especially for younger students.
Business Applications
Use Case: Jest scripts could power customer support chatbots or training simulations for employees, blending structured interactions with adaptive responses.
Impact: Provides a natural, human-like interaction layer for learning and problem-solving.
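The "structured interactions with adaptive responses" pattern described above can be sketched in a few lines of Python. This is a toy illustration of what a Jest chatbot script might compile down to; the flows, answers, and function names are all made up for the example:

```python
# Sketch of a scripted-flow chatbot with an adaptive fallback.
# All content here is illustrative; there is no real Jest runtime behind it.

SCRIPTED_FLOWS = {
    "reset password": "Go to Settings > Account > Reset Password.",
    "refund": "Refunds are handled within 5 business days of a request.",
}

def respond(message):
    """Match a scripted flow if one applies; otherwise fall back."""
    text = message.lower()
    for keyword, answer in SCRIPTED_FLOWS.items():
        if keyword in text:
            return answer
    # Adaptive fallback: a real system might call a language model here
    # or escalate to a human agent.
    return "I'm not sure about that one, let me connect you to a person."

print(respond("How do I reset password?"))
```

Scripted answers stay predictable and auditable, while anything unscripted drops through to the adaptive layer.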
Art Installations
Use Case: Artists might use Jest to script interactive digital displays, where viewers can engage with elements that change based on their input.
Impact: Enhances viewer engagement with immersive storytelling.
Would you like examples tailored to any specific field, or a deeper dive into how this concept could integrate with existing platforms like Unity or web development frameworks?
#Code #Language #Scripts #DearDearestBrands #Playstation7
🩵
#AlchemyTechnologies#Google Inc#Alphabet Inc'#AI baby#Gemini AI#GeminiAI#WaterHoisting#Phishing#Illegal datamining#Datadumps#Datascrapes#Illegal hijacking software#Illegal arms trade#Illegal AI arms#Alphabet inc#Google Investigations#Overlapping Responsibilities#Six degrees of separation#Common threads#Links#Puzzle pieces#Sign and date
Text
the ai datascraping issue seems to have been resolved on ao3 so im unprivating my works!
Note
Hey, I appreciated your thoughts on the AO3 AI datascraping situation. Everybody's locking their fics and all my friends are basically baying for blood and talking about their stuff getting stolen, so I don't feel I can tell them this but... I don't see the point of archive-locking my fics like they're doing. Like you said, fanfic is freely given, so how can it be stolen? It's not like any of us make money off it. Most of my readers are guests, so locking my fics because of an AI panic would only deny them something they enjoy. As much as I hate AI, I'm not going to lock away my fics for the sake of "fighting back" or whatever, losing access to my tiny obscure fics would hardly stick it to the man.
Thanks for the kind words, anon.
I know a lot of people are panicking and indignant about the whole scraping thing, but I think it's exactly that- a panic reaction out of frustration. 🤷
(and once again, so no one thinks otherwise from an out-of-context message: I also hate AI and think scraping writing without consent is wrong.)
Text
Hate AI "art"? Don't want your hard work to get datascraped? I feel you. Tools like Nightshade and Glaze are cool and have appeal, but involve downloading some stuff and in some cases require pretty hefty hardware, so I felt like providing a helper myself.
What this is: a series of frames on a 3:4 aspect ratio with anti-AI messages. The second pair of frames is full of copyrighted characters and such. All of these frames are free to use and to edit.
Who can use these: Anyone! I hope they prove helpful. NOTE: The second pair is deliberately intended to either invoke or at least evoke copyright litigation panic. If you use this for your own profitable works and get sued as a result, I am not responsible and cannot help you.
Can these be edited: Absolutely! Different font, new copyrighted characters, resizing, etc. are all an option you're welcome to apply. I would recommend adding an extra measure here and there - maybe a line of 1% opacity pixels across the actual image? Adjusting the frame itself should also make it harder for scrapers to avoid it entirely. The first pair of frames also has a line in the lower right for you to sign or put a watermark if you so choose.
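For anyone who would rather script the low-opacity-pixels idea than do it by hand in an editor, here is a rough Python sketch using the Pillow library. The line position, color, and opacity value are arbitrary choices for illustration, and this is just a way to apply the technique mentioned above, not a proven anti-scraping defense:

```python
from PIL import Image, ImageDraw

def add_faint_line(src_path, dst_path, opacity=3):
    """Overlay a nearly invisible horizontal line across an image.

    opacity is an alpha value out of 255 (3 is roughly 1%). This sketches
    the low-opacity-pixels idea from the post; it is not a tested AI deterrent.
    """
    img = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    y = img.height // 2  # arbitrary placement: one line across the middle
    draw.line([(0, y), (img.width, y)], fill=(255, 0, 255, opacity), width=1)
    out = Image.alpha_composite(img, overlay)
    out.convert("RGB").save(dst_path)

# Example use (paths are placeholders):
# add_faint_line("my_art.png", "my_art_marked.png")
```

Varying the line's position and color per image, as the post suggests for the frames themselves, should make the overlay harder to strip automatically.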
Can you make one that has/is...? I might make more of these, but no promises. This was a spite-fueled late-night project. If you would like to make your own frames, please go ahead!